Modern deep neural network models are known to erroneously classify out-of-distribution (OOD) test data into one of the in-distribution (ID) training classes with high confidence. This can have catastrophic consequences in safety-critical applications. A popular mitigation strategy is to train a separate classifier that can detect such OOD samples at test time. In most practical settings, OOD examples are not known at train time, so a key question is: how can we augment the ID data with synthetic OOD samples to train such an OOD detector? In this paper, we propose a novel compound corruption technique for OOD data augmentation, called CnC. One of the key advantages of CnC is that it does not require any held-out data besides the training set. Furthermore, unlike current state-of-the-art (SOTA) techniques, CnC does not require backpropagation or ensembling at test time, making our method faster at inference. Our extensive comparison with 20 methods from major conferences over the past 4 years shows that a model trained with CnC-based data augmentation outperforms the SOTA in terms of both OOD detection accuracy and inference time. We include a detailed post-hoc analysis to investigate the reasons for our method's success, identifying the higher relative entropy and diversity of CnC samples as probable causes. We also provide theoretical insights via a piece-wise decomposition analysis on a two-dimensional dataset to reveal (visually and quantitatively) that our method yields tighter boundaries around the ID classes, leading to better detection of OOD samples. Source code: https://github.com/cnc-ood
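The abstract does not spell out the corruption pipeline; a minimal NumPy sketch of the general idea, synthesizing OOD training samples by compounding corruptions of ID images and labeling them as an extra class, might look as follows. The specific corruptions, their parameters, and the (K+1)-class labeling scheme here are illustrative assumptions, not the authors' exact recipe.

```python
import numpy as np

def gaussian_noise(img, sigma=0.2):
    """Additive Gaussian noise, clipped to the valid [0, 1] range."""
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)

def channel_shuffle(img):
    """Randomly permute the color channels (assumes HWC layout)."""
    return img[..., np.random.permutation(img.shape[-1])]

def patch_scramble(img, grid=4):
    """Cut the image into a grid of patches and shuffle them.
    Assumes height and width are divisible by `grid`."""
    h, w = img.shape[0] // grid, img.shape[1] // grid
    patches = [img[i*h:(i+1)*h, j*w:(j+1)*w]
               for i in range(grid) for j in range(grid)]
    np.random.shuffle(patches)
    rows = [np.concatenate(patches[r*grid:(r+1)*grid], axis=1)
            for r in range(grid)]
    return np.concatenate(rows, axis=0)

CORRUPTIONS = [gaussian_noise, channel_shuffle, patch_scramble]

def compound_corrupt(img, k=2):
    """Apply k randomly chosen corruptions in sequence, pushing the
    sample off the ID manifold while keeping low-level statistics close."""
    for fn in np.random.choice(CORRUPTIONS, size=k, replace=False):
        img = fn(img)
    return img

def augment_with_ood(images, labels, num_classes):
    """Label compound-corrupted copies as an extra (num_classes)-th 'OOD'
    class, so a single (K+1)-way classifier doubles as an OOD detector."""
    ood = np.stack([compound_corrupt(im) for im in images])
    ood_labels = np.full(len(images), num_classes)
    return np.concatenate([images, ood]), np.concatenate([labels, ood_labels])
```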
Logical reasoning over text is an important ability that requires understanding the information present in the text and its interconnections, and then reasoning through them to infer new conclusions. Prior works on improving the logical reasoning ability of language models require complex processing of training data (e.g., aligning symbolic knowledge to text), yielding task-specific data augmentation solutions that restrict the learning of general logical reasoning skills. In this work, we propose APOLLO, an adaptively pretrained language model with improved logical reasoning abilities. We select a subset of Wikipedia, based on a set of logical inference keywords, for continued pretraining of a language model. We use two self-supervised loss functions: a modified masked language modeling loss in which only specific parts-of-speech words, which would likely require more reasoning than basic language understanding, are masked, and a sentence-level classification loss that teaches the model to distinguish between entailment and contradiction types of sentences. The proposed training paradigm is both simple and independent of task formats. We demonstrate the effectiveness of APOLLO by comparing it with prior baselines on two logical reasoning datasets. APOLLO performs comparably on ReClor and outperforms baselines on LogiQA.
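As a rough illustration of the selective-masking half of the training recipe (the exact tag set and masking ratio below are assumptions, not the authors' choices), one can mask only words whose part-of-speech plausibly carries reasoning load:

```python
import random
import nltk  # assumes the punkt and averaged_perceptron_tagger data are downloaded

# Hypothetical choice of tags: verbs, adjectives, adverbs, and modals,
# on the assumption that they demand more reasoning than function words.
REASONING_TAGS = {"VB", "VBD", "VBG", "VBN", "VBP", "VBZ",
                  "JJ", "JJR", "JJS", "RB", "RBR", "RBS", "MD"}

def selective_mask(sentence, mask_token="[MASK]", p=0.3):
    """Mask only words tagged with a 'reasoning' part of speech."""
    tokens = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokens)
    return " ".join(mask_token if tag in REASONING_TAGS and random.random() < p
                    else tok for tok, tag in tagged)

print(selective_mask("The experiment failed because the reagent was contaminated."))
```

The second loss, the sentence-level entailment/contradiction classification, would be trained alongside this objective; it is not sketched here.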
It is known that neural networks have the problem of being over-confident when directly using the output label distribution to generate uncertainty measures. Existing methods mainly resolve this issue by retraining the entire model to impose the uncertainty quantification capability so that the learned model can achieve desired performance in accuracy and uncertainty prediction simultaneously. However, training the model from scratch is computationally expensive and may not be feasible in many situations. In this work, we consider a more practical post-hoc uncertainty learning setting, where a well-trained base model is given, and we focus on the uncertainty quantification task at the second stage of training. We propose a novel Bayesian meta-model to augment pre-trained models with better uncertainty quantification abilities, which is effective and computationally efficient. Our proposed method requires no additional training data and is flexible enough to quantify different uncertainties and easily adapt to different application settings, including out-of-domain data detection, misclassification detection, and trustworthy transfer learning. We demonstrate our proposed meta-model approach's flexibility and superior empirical performance on these applications over multiple representative image classification benchmarks.
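The abstract leaves the meta-model architecture unspecified; a minimal sketch of the general recipe, a small head trained on top of a frozen base model that outputs both predictions and an uncertainty estimate, might look like the following. The layer sizes are assumptions, and the deterministic log-variance head is a simplification of the paper's Bayesian treatment.

```python
import torch
import torch.nn as nn

class MetaModel(nn.Module):
    """Small head on a frozen base model that predicts both class
    logits and a per-input uncertainty estimate."""
    def __init__(self, feat_dim, num_classes, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.logits = nn.Linear(hidden, num_classes)
        self.log_var = nn.Linear(hidden, 1)  # predicted log-variance

    def forward(self, feats):
        h = self.body(feats)
        return self.logits(h), self.log_var(h)

def predict_with_uncertainty(base, meta, x):
    with torch.no_grad():          # the base model stays frozen; only the
        feats = base(x)            # meta-model is trained in stage two
    logits, log_var = meta(feats)
    return logits.softmax(-1), log_var.exp()
```

Because only the head is trained, the second stage needs no retraining of the base model, which is the source of the claimed computational efficiency.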
In consequential decision-making applications, mitigating unwanted biases in machine learning models that yield systematic disadvantage to members of groups delineated by sensitive attributes such as race and gender is one key intervention to strive for equity. Focusing on demographic parity and equality of opportunity, in this paper we propose an algorithm that improves the fairness of a pre-trained classifier by simply dropping carefully selected training data points. We select instances based on their influence on the fairness metric of interest, computed using an infinitesimal jackknife-based approach. The dropping of training points is done in principle, but in practice does not require the model to be refit. Crucially, we find that such an intervention does not substantially reduce the predictive performance of the model but drastically improves the fairness metric. Through careful experiments, we evaluate the effectiveness of the proposed approach on diverse tasks and find that it consistently improves upon existing alternatives.
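For a differentiable model, the infinitesimal-jackknife influence of dropping point i on a scalar fairness metric F can be approximated without refitting as (1/n) ∇F(θ)ᵀ H⁻¹ ∇ℓᵢ(θ), where H is the Hessian of the mean training loss. A sketch for logistic regression follows; the closed forms are standard, but their use here is our illustration, not the paper's exact estimator.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def drop_influence(X, y, theta, metric_grad, damping=1e-4):
    """Predicted change in a metric F(theta) from dropping each point:
        dF_i ~= (1/n) * grad_F^T H^{-1} grad_loss_i
    (a positive score means dropping point i is predicted to raise F)."""
    n, d = X.shape
    p = sigmoid(X @ theta)
    grads = (p - y)[:, None] * X                    # per-point loss gradients
    # Hessian of the mean log-loss, with a small damping term for stability.
    H = (X * (p * (1 - p))[:, None]).T @ X / n + damping * np.eye(d)
    v = np.linalg.solve(H, metric_grad)             # H^{-1} grad_F
    return (grads @ v) / n                          # one score per point

# If F is a fairness gap (lower is better), candidates for removal are the
# points with the most positive scores; the model weights are then adjusted
# by the same first-order correction rather than refit from scratch.
```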
Test log-likelihood is commonly used to compare different models of the same data and different approximate inference algorithms for fitting the same probabilistic model. We present simple examples demonstrating how comparisons based on test log-likelihood can contradict comparisons according to other objectives. Specifically, our examples show that (i) conclusions about forecast accuracy based on test log-likelihood comparisons may not agree with conclusions based on other distributional quantities like means; and (ii) that approximate Bayesian inference algorithms that attain higher test log-likelihoods need not also yield more accurate posterior approximations.
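A toy numerical instance of claim (i) is easy to construct: with test data from N(0, 1), a predictive distribution with the wrong mean but a well-matched spread can out-score one with the correct mean but an overconfident variance. The specific numbers below are ours, not the paper's.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=100_000)    # test data from N(0, 1)

# Model A: biased predictive mean (1.0) but a wide spread (sd = 2.0).
# Model B: correct predictive mean (0.0) but overconfident (sd = 0.5).
ll_a, mse_a = norm.logpdf(x, 1.0, 2.0).mean(), ((x - 1.0) ** 2).mean()
ll_b, mse_b = norm.logpdf(x, 0.0, 0.5).mean(), ((x - 0.0) ** 2).mean()

print(f"A: test log-lik {ll_a:.2f}, point-forecast MSE {mse_a:.2f}")  # ~ -1.86, ~ 2.0
print(f"B: test log-lik {ll_b:.2f}, point-forecast MSE {mse_b:.2f}")  # ~ -2.23, ~ 1.0
# A attains the higher test log-likelihood even though its point
# forecast (the predictive mean) is twice as bad in squared error.
```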
Video Question Answering methods focus on commonsense reasoning and visual cognition of objects or persons and their interactions over time. Current VideoQA approaches ignore the textual information present in the video. Instead, we argue that textual information is complementary to the action and provides essential contextualisation cues to the reasoning process. To this end, we propose a novel VideoQA task that requires reading and understanding the text in the video. To explore this direction, we focus on news videos and require QA systems to comprehend and answer questions about the topics presented by combining visual and textual cues in the video. We introduce the "NewsVideoQA" dataset that comprises more than 8,600 QA pairs on 3,000+ news videos obtained from diverse news channels from around the world. We demonstrate the limitations of current Scene Text VQA and VideoQA methods and propose ways to incorporate scene text information into VideoQA methods.
Medical image classification is one of the most critical problems in the field of image recognition. A major challenge in this area is the shortage of labeled training data. Moreover, the datasets often exhibit class imbalance, as some conditions occur only rarely. As a result, accuracy on the classification task is typically low. Deep learning models in particular show promising results on image segmentation and classification problems, but they require very large datasets for training. A need therefore arises to generate additional synthetic samples from the same distribution. Previous work has shown that generating features is more effective and performs better than generating the corresponding images. We apply this idea to the domain of medical imaging. We use transfer learning to train a segmentation model on a small dataset with gold-standard class annotations. We extract the learned features and use them with an Auxiliary Classifier GAN (ACGAN) to generate synthetic features conditioned on the class label. We test the quality of the generated features in a downstream severity classification task. Experimental results show promising outcomes regarding the effectiveness of these generated features and their overall contribution to balancing the data and improving classification accuracy.
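A minimal PyTorch sketch of the ACGAN component, a feature-space generator plus a discriminator with real/fake and class heads, is given below; the layer dimensions, the multiplicative label conditioning, and the loss schedule described in the comment are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class FeatureGenerator(nn.Module):
    """Maps noise plus a class label to a synthetic feature vector."""
    def __init__(self, z_dim, n_classes, feat_dim):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim))

    def forward(self, z, y):
        return self.net(z * self.embed(y))  # condition noise on the label

class ACDiscriminator(nn.Module):
    """Two heads: real/fake (adversarial) and class label (auxiliary)."""
    def __init__(self, feat_dim, n_classes):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(feat_dim, 256), nn.LeakyReLU(0.2))
        self.adv = nn.Linear(256, 1)
        self.aux = nn.Linear(256, n_classes)

    def forward(self, f):
        h = self.body(f)
        return self.adv(h), self.aux(h)

# Training alternates the usual GAN losses with a cross-entropy loss on the
# auxiliary head for both real and generated features; minority classes can
# then be rebalanced by generating extra features conditioned on their labels.
```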
Knowledge-intensive tasks, such as open-domain question answering (QA), require access to a large amount of world or domain knowledge. A common approach to knowledge-intensive tasks is to employ a retrieve-then-read pipeline that first retrieves a handful of relevant contextual documents from an external corpus such as Wikipedia and then predicts an answer conditioned on the retrieved documents. In this paper, we present a novel perspective for solving knowledge-intensive tasks by replacing document retrievers with large language model generators. We call our method generate-then-read (GenRead), which first prompts a large language model to generate contextual documents based on a given question, and then reads the generated documents to produce the final answer. Furthermore, we propose a clustering-based prompting method that selects distinct prompts, yielding generated documents that cover different perspectives and thus better recall over acceptable answers. We conduct extensive experiments on three different knowledge-intensive tasks, including open-domain QA, fact checking, and dialogue systems. Notably, GenRead achieves exact match scores of 71.6 and 54.4 on TriviaQA and WebQ, significantly outperforming the state-of-the-art retrieve-then-read pipeline DPR-FiD by +4.0 and +3.9, without retrieving any documents from any external knowledge source. Lastly, we demonstrate that model performance can be further improved by combining retrieval and generation.
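Stripped to its essentials, generate-then-read is two LLM calls; a schematic sketch follows, where `llm` is a hypothetical text-completion callable and the prompt wording is our paraphrase, not the paper's templates.

```python
def generate_then_read(question, llm, n_docs=4):
    """Generate-then-read: prompt the model for contextual documents,
    then answer conditioned on the generated (not retrieved) context."""
    docs = [
        llm(f"Generate a background document to answer the question:\n"
            f"{question}\nDocument:")
        for _ in range(n_docs)  # multiple samples stand in for a doc set
    ]
    context = "\n\n".join(docs)
    return llm(f"Refer to the passages below and answer the question.\n"
               f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:")
```

The clustering-based variant would additionally embed sampled documents, cluster the embeddings, and pick one exemplar prompt per cluster, so that the n_docs generations cover complementary perspectives rather than near-duplicates.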
Any approach aimed at pasteurizing and quantifying a particular phenomenon must include the use of robust statistical methodologies for data analysis. With this in mind, the purpose of this study is to present statistical approaches that may be employed in nonparametric, nonhomogeneous data frameworks and to examine their application in the field of natural language processing and language clustering. Furthermore, this paper discusses the many uses of nonparametric approaches in linguistic data mining and processing. The data depth idea allows for center-outward ordering of points in any dimension, resulting in a new nonparametric multivariate statistical analysis that does not require any distributional assumptions. The concept of hierarchy is used in historical language classification and structuring, with the goal of organizing and clustering languages into subfamilies using the same premise. In this regard, the current study presents a novel approach to language family structuring based on nonparametric methods derived from the word-type structure of various languages, which is then converted into a Cartesian framework using MDS. This statistical-depth-based architecture allows for the use of data-depth-based methodologies for robust outlier detection, which is extremely useful for understanding the classification of diverse borderline languages and allows existing classification systems to be re-evaluated. Other depth-based approaches are also applied to processes such as unsupervised and supervised clustering. This paper thus provides an overview of procedures that may be applied to nonhomogeneous language classification systems in a nonparametric framework.
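As a concrete instance of the depth idea, Mahalanobis depth gives a distribution-free center-outward ordering; combined with MDS, low-depth languages surface as outliers. The sketch below uses Mahalanobis depth and scikit-learn's MDS with stand-in random features; the paper's actual feature construction from word-type structure is not reproduced here.

```python
import numpy as np
from sklearn.manifold import MDS

def mahalanobis_depth(X):
    """Center-outward depth: 1 / (1 + squared Mahalanobis distance).
    Requires only a finite covariance, no distributional assumptions."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", X - mu, cov_inv, X - mu)
    return 1.0 / (1.0 + d2)

# X: one row per language (e.g., word-type-structure statistics).
X = np.random.default_rng(0).normal(size=(30, 10))   # stand-in features
coords = MDS(n_components=2, random_state=0).fit_transform(X)  # Cartesian frame
depth = mahalanobis_depth(X)
outliers = np.argsort(depth)[:3]   # lowest-depth points ~ borderline languages
print(outliers, coords[outliers])
```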
Traditional physics-based modeling is a time-consuming bottleneck in control design for complex nonlinear systems such as autonomous underwater vehicles (AUVs). In contrast, purely data-driven models, while convenient and quick to obtain, require a large number of observations and lack operational guarantees for safety-critical systems. Data-driven models that leverage available partially characterized dynamics have the potential to provide reliable system models for high-value complex systems in typical data-limited scenarios, avoiding months of costly expert modeling time. In this work, we explore the middle ground between expert-modeled and purely data-driven modeling. We propose control-oriented parametric models with varying levels of domain awareness that exploit known system structure and prior physics knowledge to create constrained deep neural dynamical system models. We employ universal differential equations to construct data-driven black-box and gray-box representations of AUV dynamics. In addition, we explore a hybrid formulation that explicitly models the residual error associated with an imperfect gray-box model. We compare the predictive performance of the learned models across different distributions of initial conditions and control inputs to assess their accuracy, generalization, and suitability for control.
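A gray-box universal differential equation augments known physics with a learned residual term; a minimal PyTorch sketch with forward-Euler rollout follows. The linear-drag "known physics" term is a placeholder assumption, not the paper's AUV model.

```python
import torch
import torch.nn as nn

class GrayBoxDynamics(nn.Module):
    """dx/dt = f_known(x, u) + NN(x, u): prior physics plus a learned
    correction for unmodeled effects."""
    def __init__(self, state_dim, ctrl_dim, drag=0.1):
        super().__init__()
        self.drag = drag
        self.residual = nn.Sequential(
            nn.Linear(state_dim + ctrl_dim, 64), nn.Tanh(),
            nn.Linear(64, state_dim))

    def f_known(self, x, u):
        # Placeholder prior: linear damping. A real AUV model would use
        # rigid-body and hydrodynamic terms here.
        return -self.drag * x

    def forward(self, x, u):
        return self.f_known(x, u) + self.residual(torch.cat([x, u], dim=-1))

def rollout(model, x0, controls, dt=0.05):
    """Forward-Euler integration of the learned dynamics; training would
    compare this rollout against observed trajectories with an MSE loss."""
    xs, x = [x0], x0
    for u in controls:
        x = x + dt * model(x, u)
        xs.append(x)
    return torch.stack(xs)
```

Dropping f_known recovers the black-box variant, while the hybrid formulation in the abstract would add a further model of the gray-box residual error.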